The ethics of artificial intelligence is the part of the ethics of technology specific to robots and other artificially intelligent beings. It is typically divided into roboethics, a concern with the moral behavior of humans as they design, construct, use and treat artificially intelligent beings, and machine ethics, a concern with the moral behavior of artificial moral agents (AMAs).
The term "roboethics" was coined by roboticist Gianmarco Veruggio in 2002, referring to the morality of how humans design, construct, use and treat robots and other artificially intelligent beings.[1] It considers both how artificially intelligent beings may be used to harm humans and how they may be used to benefit humans.
Robot rights are the moral obligations of society towards its machines, similar to human rights or animal rights. These may include the right to life and liberty, freedom of thought and expression and equality before the law.[2] The issue has been considered by the Institute for the Future[3] and by the U.K. Department of Trade and Industry.[4]
A key issue is sentience, or the ability to feel pleasure and pain. For advocates of animal rights, sentience is the distinguishing feature that forces society to honor the rights of animals. By an argument analogous to the one used by animal rights advocates, a computer program or robot that could feel pleasure and pain would have a claim to rights as well. However, there is no universally accepted definition of sentience (or of related concepts such as consciousness) for machines.
Experts disagree on whether specific and detailed laws will be required soon or can safely be left to the distant future.[4] Glenn McGee reports that sufficiently humanoid robots may appear by 2020.[5] Ray Kurzweil sets the date at 2029.[6] However, most scientists suppose that at least 50 years may have to pass before any sufficiently advanced system exists.[7][8]
The rules for the 2003 Loebner Prize competition explicitly addressed the question of robot rights:
61. If, in any given year, a publicly available open source Entry entered by the University of Surrey or the Cambridge Center wins the Silver Medal or the Gold Medal, then the Medal and the Cash Award will be awarded to the body responsible for the development of that Entry. If no such body can be identified, or if there is disagreement among two or more claimants, the Medal and the Cash Award will be held in trust until such time as the Entry may legally possess, either in the United States of America or in the venue of the contest, the Cash Award and Gold Medal in its own right.[9]
Aleksandr Solzhenitsyn's The First Circle describes the use of speech recognition technology in the service of tyranny.[10] If an AI program existed that could understand speech and natural language (e.g. English), then, with adequate processing power, it could theoretically listen to every phone conversation and read every email in the world, understand them, and report back to its operators exactly what is said and who is saying it. Such a program could allow governments or other entities to efficiently suppress dissent and attack their enemies.
Joseph Weizenbaum argued in 1976 that AI technology should not be used to replace people in positions that require respect and care, such as a customer service representative, a therapist, a nursemaid for the elderly, a soldier, a judge, or a police officer.
Weizenbaum explains that we require authentic feelings of empathy from people in these positions. If machines replace them, we will find ourselves alienated, devalued and frustrated. Artificial intelligence, if used in this way, represents a threat to human dignity. Weizenbaum argues that the fact that we are entertaining the possibility of machines in these positions suggests that we have experienced an "atrophy of the human spirit that comes from thinking of ourselves as computers."[11]
Pamela McCorduck counters that, speaking for women and minorities, she would "rather take my chances with an impartial computer," pointing out that there are conditions in which we would prefer to have automated judges and police that have no personal agenda at all.[11] AI founder John McCarthy objects to the moralizing tone of Weizenbaum's critique: "When moralizing is both vehement and vague, it invites authoritarian abuse," he writes.[11]
Machine ethics (or machine morality) is the field of research concerned with designing Artificial Moral Agents (AMAs), robots or artificially intelligent computers that behave morally or as though moral.[12][13][14][15]
Isaac Asimov considered the issue in the 1950s in his I, Robot. At the insistence of his editor John W. Campbell Jr., he proposed the Three Laws of Robotics to govern artificially intelligent systems. Much of his work was then spent testing the boundaries of his three laws to see where they would break down, or where they would create paradoxical or unanticipated behavior. His work suggests that no set of fixed laws can sufficiently anticipate all possible circumstances.[16]
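That brittleness is easy to illustrate. The following is a minimal sketch, not drawn from Asimov or from any real robotics system (the function names, the outcome fields, and the dilemma itself are all invented), encoding the Three Laws as a fixed, prioritized rule table and then constructing a scenario in which every available action violates some rule, so the table gives no guidance at all:

    def violates_first_law(outcome):
        # First Law: a robot may not injure a human being or, through
        # inaction, allow a human being to come to harm.
        return outcome["humans_harmed"] > 0

    def violates_second_law(outcome):
        # Second Law: a robot must obey orders given by human beings,
        # except where such orders would conflict with the First Law.
        return not outcome["order_obeyed"]

    def violates_third_law(outcome):
        # Third Law: a robot must protect its own existence, as long as
        # such protection does not conflict with the first two laws.
        return outcome["robot_destroyed"]

    LAWS = [violates_first_law, violates_second_law, violates_third_law]

    def permitted(outcome):
        # An action is permitted only if it violates none of the laws.
        return not any(law(outcome) for law in LAWS)

    # A hypothetical dilemma: both intervening and standing by lead to a
    # human being harmed, so the fixed rule table forbids every option.
    options = {
        "intervene":  {"humans_harmed": 1, "order_obeyed": True,  "robot_destroyed": True},
        "do_nothing": {"humans_harmed": 2, "order_obeyed": False, "robot_destroyed": False},
    }

    print([name for name, outcome in options.items() if permitted(outcome)])  # prints []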
In 2009, during an experiment at the Laboratory of Intelligent Systems in the École Polytechnique Fédérale de Lausanne in Switzerland, robots that were programmed to cooperate with each other in searching out a beneficial resource and avoiding a poisonous one eventually learned to lie to each other in an attempt to hoard the beneficial resource.[17] One problem in this case may have been that the goals were "terminal", that is, ends in themselves; ultimate human motives, by contrast, typically require never-ending learning.[18]
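The selection pressure behind that result can be reproduced in miniature. The sketch below is a toy replicator-style simulation invented for illustration; it is not the experiment's actual code, and the payoff values and follower count are assumptions. Honest signalers attract a crowd and must share the food they find, liars eat alone, so deception steadily takes over the population:

    import random

    POP_SIZE, GENERATIONS, MUTATION = 100, 300, 0.01
    FOOD_VALUE = 10.0   # total payoff at a food source, split among arrivals
    FOLLOWERS = 5       # assumed: an honest signal attracts 5 other robots

    # Each agent carries one gene: True = signal honestly, False = lie.
    population = [True] * POP_SIZE  # start fully honest

    def payoff(honest):
        # Honest signalers share the food with the crowd they attract;
        # liars send the crowd elsewhere and keep the food to themselves.
        return FOOD_VALUE / (1 + FOLLOWERS) if honest else FOOD_VALUE

    for _ in range(GENERATIONS):
        weights = [payoff(agent) for agent in population]
        # Fitness-proportional reproduction with a small mutation rate.
        population = [
            parent if random.random() > MUTATION else not parent
            for parent in random.choices(population, weights=weights, k=POP_SIZE)
        ]

    print("honest fraction:", sum(population) / POP_SIZE)  # drifts toward 0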
Some experts and academics have questioned the use of robots in military combat, especially when such robots are given some degree of autonomous function.[19] The US Navy has funded a report which indicates that as military robots become more complex, there should be greater attention to the implications of their ability to make autonomous decisions.[20][21] The President of the Association for the Advancement of Artificial Intelligence has commissioned a study to look at this issue.[22] They point to programs like the Language Acquisition Device, which can emulate human interaction.
Vernor Vinge has suggested that a moment may come when some computers are smarter than humans. He calls this "the Singularity."[23] He suggests that it may be somewhat or possibly very dangerous for humans.[24] This is discussed by a philosophy called Singularitarianism. The Singularity Institute for Artificial Intelligence has suggested a need to build "Friendly AI", meaning that the advances which are already occurring with AI should also include an effort to make AI intrinsically friendly and humane.[25]
In 2009, academics and technical experts attended a conference to discuss the potential impact of robots and computers, and of the hypothetical possibility that they could become self-sufficient and able to make their own decisions. They discussed the extent to which computers and robots might acquire autonomy, and the degree to which they could use such abilities to pose a threat or hazard. They noted that some machines have acquired various forms of semi-autonomy, including the ability to find power sources on their own and to independently choose targets to attack with weapons. They also noted that some computer viruses can evade elimination and have achieved "cockroach intelligence." They concluded that self-awareness as depicted in science fiction is probably unlikely, but that there are other potential hazards and pitfalls.[23]
In Moral Machines: Teaching Robots Right from Wrong,[26] Wendell Wallach and Colin Allen conclude that attempts to teach robots right from wrong will likely advance understanding of human ethics by motivating humans to address gaps in modern normative theory and by providing a platform for experimental investigation. As one example, it has introduced normative ethicists to the controversial issue of which specific learning algorithms to use in machines. Nick Bostrom and Eliezer Yudkowsky have argued for decision trees (such as ID3) over neural networks and genetic algorithms on the grounds that decision trees obey modern social norms of transparency and predictability (e.g. stare decisis),[27] while Chris Santos-Lang argued in the opposite direction on the grounds that the norms of any age must be allowed to change and that natural failure to fully satisfy these particular norms has been essential in making humans less vulnerable to criminal "hackers".[18]
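The transparency half of that argument can be demonstrated directly. The sketch below is a minimal illustration using scikit-learn; the feature names and toy "judgment" data are invented here, not taken from Wallach, Allen, Bostrom, or Yudkowsky. It trains a small decision tree and prints it as explicit if/then rules, the kind of inspectability that a trained neural network's weight matrices do not offer:

    from sklearn.tree import DecisionTreeClassifier, export_text

    # Hypothetical case features: [harm_caused, consent_given, benefit_produced]
    X = [
        [1, 0, 0],
        [1, 1, 1],
        [0, 0, 1],
        [0, 1, 1],
        [1, 0, 1],
        [0, 0, 0],
    ]
    y = [0, 1, 1, 1, 0, 1]  # toy labels: 0 = impermissible, 1 = permissible

    tree = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, y)

    # Every decision the model can make is readable as an if/then rule,
    # which is the predictability property the decision-tree camp values.
    print(export_text(tree, feature_names=["harm_caused", "consent_given", "benefit_produced"]))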
Several critics have argued that AI technology has the potential to disrupt existing society and introduce new dangers and malaise. Nick Bostrom, a philosopher at the University of Oxford, published the paper "Existential Risks" in the Journal of Evolution and Technology, in which he argues that artificial intelligence has the capability to bring about human extinction.
The movie The Thirteenth Floor suggests a future where simulated worlds with sentient inhabitants are created by computer game consoles for the purpose of entertainment. The movie The Matrix suggests a future where the dominant species on planet Earth are sentient machines and humanity is treated with the utmost speciesism. The short story "The Planck Dive" suggests a future where humanity has turned itself into software that can be duplicated and optimized, and the relevant distinction between kinds of software is between sentient and non-sentient. The same idea can be found in the Emergency Medical Hologram of the starship Voyager, an apparently sentient copy of a reduced subset of the consciousness of its creator, Dr. Zimmerman, who, for the best motives, created the system to give medical assistance in emergencies. The movies Bicentennial Man and A.I. deal with the possibility of sentient robots that could love. I, Robot explored some aspects of Asimov's three laws. All these scenarios try to foresee possibly unethical consequences of the creation of sentient computers.
Over time, debates have tended to focus less and less on possibility and more on desirability, as emphasized in the "Cosmist" and "Terran" debates initiated by Hugo de Garis and Kevin Warwick. A Cosmist, according to de Garis, seeks to build more intelligent successors to the human species.